14 research outputs found

    A system for modeling social traits in realistic faces with artificial intelligence

    Humans have specially developed their perceptual capacity to process faces and to extract information from facial features. Using our behavioral capacity to perceive faces, we make attributions such as personality, intelligence or trustworthiness based on facial appearance, and these attributions often have a strong impact on social behavior in different domains. Faces therefore play a central role in our relationships with other people and in our everyday decisions. With the popularization of the Internet, people participate in many kinds of virtual interactions, from social experiences, such as games, dating or communities, to professional activities, such as e-commerce, e-learning, e-therapy or e-health. These virtual interactions create the need for faces that represent the actual people interacting in the digital world: thus the concept of the avatar emerged. Avatars are used to represent users in different scenarios and scopes, from personal life to professional situations. In all these cases, the appearance of the avatar may affect not only other people's opinions and perceptions but also self-perception, influencing the subject's own attitude and behavior. In fact, avatars are often employed to elicit impressions or emotions through non-verbal expressions, and they can improve online interactions and are even useful for educational or therapeutic purposes. Therefore, the ability to generate realistic-looking avatars that elicit a desired set of social impressions constitutes a novel tool, useful in a wide range of fields. This thesis proposes a novel method for generating realistic-looking faces with an associated social profile comprising 15 different impressions. For this purpose, several partial objectives were accomplished.
    First, facial features were extracted from a database of real faces and grouped by appearance in an automatic and objective manner using dimensionality reduction and clustering techniques. This yielded a taxonomy that makes it possible to codify faces systematically and objectively according to the previously obtained clusters. Furthermore, the proposed method is not restricted to facial features, and it could be extended to automatically group any other kind of image by appearance. Second, the relationships between the different facial features and the social impressions were identified. This reveals how much a given facial feature influences the perception of a given social impression, making it possible to focus on the most important feature or features when designing faces with a sought social perception. Third, an image-editing method was implemented to generate a completely new, realistic face from just a face definition using the aforementioned facial feature taxonomy. Finally, a system to generate realistic faces with an associated social trait profile was developed, fulfilling the main objective of the present thesis. The main novelty of this work resides in the ability to work with several trait dimensions at a time on realistic faces. Thus, in contrast with previous works that use noisy images or cartoon-like or synthetic faces, the system developed in this thesis generates realistic-looking faces at the desired levels of fifteen impressions, namely Afraid, Angry, Attractive, Babyface, Disgusted, Dominant, Feminine, Happy, Masculine, Prototypical, Sad, Surprised, Threatening, Trustworthy and Unusual. The promising results obtained in this thesis will make it possible to further investigate how to model social perception in faces using a completely new approach.
    Fuentes Hurtado, FJ. (2018). A system for modeling social traits in realistic faces with artificial intelligence [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/101943
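The first partial objective (grouping facial features by appearance via dimensionality reduction followed by clustering) can be sketched compactly. This is an illustrative NumPy-only reconstruction on synthetic data, not the thesis code: the "feature patches", dimensions and cluster count are invented for the example.

```python
import numpy as np

# Illustrative sketch, NOT the thesis implementation: group flattened image
# patches by appearance with PCA (via SVD) followed by k-means, the two-stage
# recipe the abstract describes for building the facial-feature taxonomy.
rng = np.random.default_rng(0)

# synthetic stand-in data: 60 flattened patches, 3 planted appearance groups
group_centres = rng.normal(size=(3, 100)) * 5.0
X = np.vstack([c + rng.normal(size=(20, 100)) for c in group_centres])

# PCA: project each patch onto the top 5 principal directions
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                        # low-dimensional appearance descriptors

# k-means on the descriptors: farthest-point init, then Lloyd iterations
k = 3
cent = [Z[0]]
for _ in range(k - 1):                   # greedily seed well-separated centres
    d = np.min([((Z - c) ** 2).sum(axis=1) for c in cent], axis=0)
    cent.append(Z[np.argmax(d)])
cent = np.array(cent)
for _ in range(25):
    labels = np.argmin(((Z[:, None, :] - cent[None]) ** 2).sum(axis=2), axis=1)
    cent = np.array([Z[labels == j].mean(axis=0) if np.any(labels == j) else cent[j]
                     for j in range(k)])
```

Each cluster then becomes one entry of the appearance taxonomy, and a new feature image is codified by the cluster of its nearest centre.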

    EvoDeep: A new evolutionary approach for automatic Deep Neural Networks parametrisation

    [EN] Deep Neural Networks (DNNs) have become a powerful and extremely popular mechanism, widely used to solve problems of varied complexity thanks to their ability to fit models to non-linear, complex problems. Despite these well-known benefits, DNNs are complex learning models whose parametrisation and architecture are usually designed by hand. This paper proposes a new Evolutionary Algorithm, named EvoDeep, devoted to evolving the parameters and the architecture of a DNN in order to maximise its classification accuracy while maintaining a valid sequence of layers. The model is tested against a widely used dataset of handwritten digit images. The experiments performed on this dataset show that the Evolutionary Algorithm is able to select the parameters and the DNN architecture appropriately, achieving 98.93% accuracy in the best run. (C) 2017 Elsevier Inc. All rights reserved.
    This work has been co-funded by the following research projects: EphemeCH (TIN2014-56494-C4-4-P) and DeepBio (TIN2017-85727-C4-3-P), Spanish Ministry of Economy and Competitiveness and European Regional Development Fund (FEDER); Justice Programme of the European Union (2014-2020), 723180 - RiskTrack - JUST-2015-JCOO-AG/JUST-2015-JCOO-AG-1; and the CAM grant S2013/ICE-3095 (CIBERDINE: Cybersecurity, Data and Risks). The contents of this publication are the sole responsibility of their authors and can in no way be taken to reflect the views of the European Commission.
    Martín, A.; Lara-Cabrera, R.; Fuentes-Hurtado, FJ.; Naranjo Ornedo, V.; Camacho, D. (2018). EvoDeep: A new evolutionary approach for automatic Deep Neural Networks parametrisation. Journal of Parallel and Distributed Computing. 117:180-191. https://doi.org/10.1016/j.jpdc.2017.09.006
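The evolutionary loop the abstract describes (evolving layer sequences while keeping them valid) can be sketched as a toy. This is not the published EvoDeep implementation: the layer grammar is reduced to three types, and the fitness below is a cheap stand-in for the validation accuracy of a trained network, so the loop runs without any deep-learning framework.

```python
import random

# Toy sketch of the EvoDeep idea, NOT the published system: individuals encode
# a DNN as a sequence of layer types, genetic operators must preserve validity
# (no conv/pool layer after the first dense layer; a dense classifier last),
# and selection maximises a fitness function.
random.seed(0)

def random_individual(max_len=8):
    # valid by construction: conv/pool prefix followed by a dense suffix
    n_feat = random.randint(1, max_len - 1)
    n_dense = random.randint(1, max_len - n_feat)
    return [random.choice(["conv", "pool"]) for _ in range(n_feat)] + ["dense"] * n_dense

def is_valid(ind):
    seen_dense = False
    for layer in ind:
        if layer == "dense":
            seen_dense = True
        elif seen_dense:          # feature-extraction layer after a dense layer
            return False
    return bool(ind) and ind[-1] == "dense"

def fitness(ind):
    # stand-in objective (rewards deeper, mixed architectures); the real
    # fitness is classification accuracy after training the encoded DNN
    return len(set(ind)) + 0.1 * len(ind)

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)))
    child = a[:cut] + b[cut:]
    return child if is_valid(child) else random_individual()  # repair by resampling

def evolve(pop_size=20, generations=30):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                  # truncation selection
        pop = elite + [crossover(*random.sample(elite, 2)) for _ in elite]
    return max(pop, key=fitness)

best = evolve()
```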

    The Influence of Each Facial Feature on How We Perceive and Interpret Human Faces

    [EN] Facial information is processed by our brain in such a way that we immediately make judgments about, for example, attractiveness or masculinity, or interpret the personality traits or moods of other people. The appearance of each facial feature has an effect on our perception of facial traits. This research addresses the problem of measuring the size of these effects for five facial features (eyes, eyebrows, nose, mouth, and jaw). Our proposal is a mixed feature-based and image-based approach that allows judgments to be made on complete real faces in the categorization tasks, rather than on the synthetic, noisy, or partial faces that can bias the assessment. Each facial feature is automatically classified by its global appearance using principal component analysis. Using this procedure, we establish a reduced set of relevant specific attributes (each one describing a complete facial feature) to characterize faces. In this way, a more direct link can be established between perceived facial traits and what people intuitively consider an eye, an eyebrow, a nose, a mouth, or a jaw. A set of 92 male faces was classified using this procedure, and the results were related to their scores on 15 perceived facial traits. We show that the relevant features greatly depend on what we are trying to judge. Globally, the eyes have the greatest effect. However, other facial features are more relevant for some judgments, like the mouth for happiness and femininity or the nose for dominance.
    This study was carried out using the Chicago Face Database developed at the University of Chicago by Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink.
    Diego-Mas, JA.; Fuentes-Hurtado, FJ.; Naranjo Ornedo, V.; Alcañiz Raya, ML. (2020). The Influence of Each Facial Feature on How We Perceive and Interpret Human Faces. i-Perception. 11(5):1-18. https://doi.org/10.1177/2041669520961123
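The study relates the appearance classes of each feature to scores on 15 perceived traits and measures the size of those effects. A standard one-way effect size for that design is eta squared (the share of trait-score variance explained by the grouping). The illustration below runs on synthetic ratings; the grouping variable stands in for one feature's appearance class, and none of this is the paper's actual data or code.

```python
import numpy as np

# Sketch of the effect-size logic: faces are grouped by the appearance class of
# one facial feature, and eta squared measures how much of the variance in a
# perceived-trait score that grouping explains. Synthetic data for illustration.

def eta_squared(scores, groups):
    """One-way effect size: SS_between / SS_total."""
    scores = np.asarray(scores, dtype=float)
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_between = sum(
        (scores[groups == g].mean() - grand) ** 2 * (groups == g).sum()
        for g in np.unique(groups)
    )
    return ss_between / ss_total

rng = np.random.default_rng(1)
groups = np.repeat([0, 1, 2], 30)            # three appearance classes of one feature
shift = np.array([0.0, 1.0, 2.0])[groups]    # class shifts the perceived-trait mean
scores = shift + rng.normal(size=90)         # plus rating noise
effect = eta_squared(scores, groups)
```

With these planted means and unit noise, the grouping explains roughly 40% of the variance, mirroring how the paper can report that, say, the eyes carry the largest effect for most traits while the mouth dominates for happiness.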

    Evolutionary Computation for Modelling Social Traits in Realistic Looking Synthetic Faces

    [EN] Human faces play a central role in our lives. Because of our behavioural capacity to perceive faces, how a face looks in a painting, a movie, or an advertisement can dramatically influence how we feel about it and which emotions it elicits. Facial information is processed by our brain in such a way that we immediately make judgements like attractiveness or masculinity or interpret the personality traits or moods of other people. Owing to the importance of appearance-driven judgements of faces, this has become a major focus not only for psychological research, but also for neuroscientists, artists, engineers, and software developers. New technologies can now create realistic-looking synthetic faces that are used in the arts, online activities, advertising, and movies. However, there is no method to generate virtual faces that convey the desired sensations to observers. In this work, we present a genetic-algorithm-based procedure to create realistic faces by combining facial features in appropriate relative positions. A model of how observers will perceive a face, based on its features' appearances and relative positions, was developed and used as the fitness function of the algorithm. The model is able to predict 15 facial social traits related to aesthetics, moods, and personality. The proposed procedure was validated by comparing its results with the opinions of human observers. This procedure is useful not only for creating characters for artistic purposes, but also for online activities, advertising, surgery, or criminology.
    Fuentes-Hurtado, FJ.; Diego-Mas, JA.; Naranjo Ornedo, V.; Alcañiz Raya, ML. (2018). Evolutionary Computation for Modelling Social Traits in Realistic Looking Synthetic Faces. Complexity. 1-16. https://doi.org/10.1155/2018/9270152
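The shape of this procedure (a genetic algorithm whose fitness function is a model of perceived traits) can be sketched as follows. Everything here is illustrative, not the published implementation: the random linear `MODEL` stands in for the perception model fitted to observer ratings, and a genome simply picks one appearance class per facial feature rather than encoding full relative positions.

```python
import math
import random

# Hedged sketch of the pipeline shape, NOT the paper's code: a genome selects
# one appearance class per facial feature; a stand-in linear model maps the
# genome to a 15-trait profile; fitness is closeness to a target profile.
random.seed(2)
FEATURES = ["eyes", "eyebrows", "nose", "mouth", "jaw"]
N_CLASSES = 10       # appearance classes per feature (taxonomy size is invented)
N_TRAITS = 15        # the 15 social traits predicted by the fitted model

# random stand-in for the learned perception model: the per-trait contribution
# of each (feature, appearance-class) pair
MODEL = {(f, c): [random.uniform(-1, 1) for _ in range(N_TRAITS)]
         for f in FEATURES for c in range(N_CLASSES)}

def predict(genome):
    return [sum(MODEL[(f, genome[f])][t] for f in FEATURES) for t in range(N_TRAITS)]

def fitness(genome, target):
    return -math.dist(predict(genome), target)   # closer to target = fitter

def evolve(target, pop_size=40, generations=60):
    pop = [{f: random.randrange(N_CLASSES) for f in FEATURES} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = {f: random.choice((a[f], b[f])) for f in FEATURES}  # uniform crossover
            if random.random() < 0.3:                                    # mutation
                child[random.choice(FEATURES)] = random.randrange(N_CLASSES)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda g: fitness(g, target))

target = [0.0] * N_TRAITS        # e.g. a neutral desired social profile
best = evolve(target)
```

In the paper, the fitness function is the validated perception model, so the evolved combination of features is one that observers should rate close to the requested trait levels.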

    A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images

    [EN] This work describes a new hybrid method for accurate iris segmentation from full-face images, independent of the subject's ethnicity. It combines three techniques: facial key-point detection, the integro-differential operator (IDO), and mathematical morphology. First, facial landmarks are extracted with the Chehra algorithm in order to locate the eyes. The IDO is then applied to the extracted sub-image containing only the eye in order to locate the iris. Once the iris is located, a series of mathematical morphological operations segments it accurately. Results are reported and compared across four ethnicities (Asian, Black, Latino and White), as well as against two other iris segmentation algorithms; robustness against rotation, blurring and noise is also assessed. The method achieves state-of-the-art performance and remains robust under small amounts of blur, noise and/or rotation. Furthermore, it is fast and accurate, and its code is publicly available.
    Fuentes-Hurtado, FJ.; Naranjo Ornedo, V.; Diego-Mas, JA.; Alcañiz Raya, ML. (2019). A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images. EURASIP Journal on Image and Video Processing (Online). 2019(1):1-14. https://doi.org/10.1186/s13640-019-0473-0
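    The pipeline above (facial landmarks → eye sub-image → IDO → morphology) centres on Daugman's integro-differential operator, which finds the circle along which mean intensity changes most sharply. As a minimal illustration only — not the paper's MATLAB implementation, and with a fixed centre rather than a full search — the following sketch applies a simplified IDO to a synthetic eye image; all function names are our own:

```python
import numpy as np

def circle_mean(img, x0, y0, r, n_points=64):
    """Mean intensity sampled along a circle of radius r centred at (x0, y0)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.clip((x0 + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, x0, y0, r_min, r_max):
    """Simplified Daugman-style IDO: return the radius at which the mean
    intensity along concentric circles drops the most (iris/sclera boundary)."""
    means = np.array([circle_mean(img, x0, y0, r) for r in range(r_min, r_max)])
    drops = np.abs(np.diff(means))
    return r_min + 1 + int(np.argmax(drops))

# Synthetic "eye": a dark iris disc of radius 20 on a bright background.
img = np.full((100, 100), 200.0)
yy, xx = np.mgrid[:100, :100]
img[(xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2] = 60.0

iris_radius = integro_differential(img, 50, 50, 5, 40)  # close to 20
```

    The real method additionally searches over candidate centres, smooths the radial derivative, and refines the boundary with morphological operations, as described in the abstract.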

    A Comparison of Physiological Signal Analysis Techniques and Classifiers for Automatic Emotional Evaluation of Audiovisual Contents

    Get PDF
    This document is protected by copyright and was first published by Frontiers. All rights reserved; it is reproduced with permission.
    This work focuses on finding the most discriminatory or representative features for classifying commercials as having negative, neutral or positive effectiveness according to the Ace Score index. For this purpose, an experiment involving forty-seven participants was carried out, in which electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR) and respiration data were acquired while subjects watched 30 minutes of audiovisual content. This content was composed of a submarine documentary and nine commercials (one of them the ad under evaluation). After signal pre-processing, four sets of features were extracted from the physiological signals using different state-of-the-art metrics. These features, computed in the time and frequency domains, are the inputs to several basic and advanced classifiers. An average of 89.76% of the instances was correctly classified according to the Ace Score index. The best results were obtained by a classifier combining AdaBoost and Random Forest with automatic feature selection; the selected features were those extracted from the GSR and HRV signals. These results are promising for the field of audiovisual content evaluation by means of physiological signal processing.
    This work has been supported by the Heineken Endowed Chair in Neuromarketing at the Universitat Politecnica de Valencia in order to research and apply new technologies and neuroscience in the communication, distribution and consumption fields.
    Colomer Granero, A.; Fuentes-Hurtado, FJ.; Naranjo Ornedo, V.; Guixeres Provinciale, J.; Ausin-Azofra, JM.; Alcañiz Raya, ML. (2016). A Comparison of Physiological Signal Analysis Techniques and Classifiers for Automatic Emotional Evaluation of Audiovisual Contents. Frontiers in Computational Neuroscience. 10(74):1-16. doi:10.3389/fncom.2016.00074
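    The abstract singles out GSR and HRV features as the most discriminative inputs. As one hedged illustration of what time-domain HRV features look like in practice — the paper's exact feature set is not reproduced here, and the RR series below is hypothetical — this sketch computes two standard metrics, SDNN and RMSSD:

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Two standard time-domain HRV metrics from RR intervals in ms:
    SDNN (overall variability) and RMSSD (short-term, beat-to-beat)."""
    rr = np.asarray(rr_ms, dtype=float)
    return {
        "sdnn": rr.std(ddof=1),                       # sample std of RR series
        "rmssd": np.sqrt(np.mean(np.diff(rr) ** 2)),  # RMS of successive diffs
    }

# Hypothetical RR-interval series (ms), as produced by an ECG R-peak detector.
rr = [800, 810, 790, 805, 795, 815, 800]
feats = hrv_time_features(rr)
```

    Features of this kind, stacked with GSR descriptors, would then form the input vectors for the AdaBoost/Random Forest ensemble the study reports.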

    Congreso Internacional de Responsabilidad Social: Apuestas para el desarrollo regional

    Get PDF
    International Congress on Social Responsibility: commitments to regional development [1st edition / Nov. 6-7, 2019, Bogotá D.C.]. The International Congress on Social Responsibility "Apuestas para el Desarrollo Regional" was held on 6 and 7 November 2019 in Bogotá D.C. as an academic and research event led by the Corporación Universitaria Minuto de Dios - UNIMINUTO, Rectoría Cundinamarca. Its aim was to foster new paradigms and to disseminate renewed knowledge about social responsibility, a goal the institution has adopted as an ethical and political stance that shapes its teaching, research and social outreach, and whose central purpose is to promote "a conscious and critical awareness of problematic situations, both in communities and in the country, as well as the acquisition of competences oriented towards the promotion of, and commitment to, integral human and social development" (UNIMINUTO, 2014). This stance of critical awareness and social sensitivity, together with the experience gained through coordinated work with other academic institutions and directly with communities, made the event's central objective the reflection of the different stakeholder groups and the management of their impacts — specific elements that helped the audience become aware of the role to be assumed in favour of social responsibility as a firm contribution to regional development and, in turn, to the strengthening of the Sustainable Development Goals.

    Desarrollo de un modelo anisotrópico para la simulación electrofisiológica auricular

    Full text link
    Existing methods for acquiring cardiac potentials are limited: the electrocardiogram, the intracavitary navigator and body surface potential mapping (BSPM) cannot obtain a complete potential map of the heart. This project develops an anisotropic atrial model with which the evolution of the potential can be simulated and observed at any point of the atria and at any instant in time, allowing both normal and pathological atrial behaviour to be studied in greater depth.
    Fuentes Hurtado, FJ. (2013). Desarrollo de un modelo anisotrópico para la simulación electrofisiológica auricular. http://hdl.handle.net/10251/33059

    Improving the quality of image generation in art with top-k training and cyclic generative methods

    No full text
    Abstract: The creation of artistic images through the use of Artificial Intelligence is an area that has gained interest in recent years. In particular, the ability of neural networks to separate and subsequently recombine the style of different images, generating a new artistic image with the desired style, has attracted both the academic and the industrial community. This work addresses the challenge of generating artistic images framed in the style of pictorial Impressionism and, specifically, imitating the style of one of its greatest exponents, the painter Claude Monet. After analysing several theoretical approaches, Cycle Generative Adversarial Networks are chosen as the base model. On top of this, a training methodology not previously applied to cyclical systems, the top-k approach, is implemented: in each training iteration, the system uses only the k images that best imitated the artist's style in the previous iteration. To evaluate the performance of the proposed methods, the results of both methodologies, basic and top-k, are analysed from both a quantitative and a qualitative perspective. Both evaluations show that the proposed top-k approach recreates the author's style more successfully and, at the same time, demonstrate the ability of Artificial Intelligence to generate something as creative as Impressionist paintings.
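    The top-k selection rule described above can be sketched in a few lines. This is a minimal, framework-free illustration of the selection step only — keeping the k generated samples the discriminator rates as most realistic for the generator update — not the full CycleGAN training loop, and the scores below are hypothetical:

```python
import numpy as np

def topk_indices(fake_scores, k):
    """Top-k selection step: indices of the k generated samples the
    discriminator rated as most realistic (highest score)."""
    return np.argsort(fake_scores)[-k:]

# Hypothetical discriminator scores for a batch of 6 generated images.
scores = np.array([0.2, 0.9, 0.4, 0.7, 0.1, 0.8])
kept = sorted(scores[topk_indices(scores, 3)].tolist())  # [0.7, 0.8, 0.9]
```

    In the cyclic setting studied in the paper, this filtering is applied per iteration so that only the generated images that best imitated the artist's style contribute to the next generator update.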